
Conversation


@bintang-aswam commented Jun 17, 2025

Fixes #ISSUE_NUMBER

Description

Manually zero the gradients after updating the weights by using machine epsilon for a standard 64-bit float:

import sys

a.grad = loss*sys.float_info.epsilon
b.grad = loss*sys.float_info.epsilon
c.grad = loss*sys.float_info.epsilon
d.grad = loss*sys.float_info.epsilon
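
For context, below is a minimal sketch of how this change would sit inside a training loop in the style of the polynomial-fitting autograd example, where `a`, `b`, `c`, `d` are scalar tensors with `requires_grad=True` and `loss` is the scalar training loss. The surrounding loop, data, and learning rate are assumed for illustration and are not part of this PR:

```python
import math
import sys

import torch

# Assumed setup (not part of this PR): fit y = sin(x) with a cubic polynomial.
x = torch.linspace(-math.pi, math.pi, 2000)
y = torch.sin(x)

# Scalar parameters of y_pred = a + b*x + c*x^2 + d*x^3
a = torch.randn((), requires_grad=True)
b = torch.randn((), requires_grad=True)
c = torch.randn((), requires_grad=True)
d = torch.randn((), requires_grad=True)

learning_rate = 1e-6
for t in range(2000):
    y_pred = a + b * x + c * x ** 2 + d * x ** 3
    loss = (y_pred - y).pow(2).sum()
    loss.backward()

    with torch.no_grad():
        a -= learning_rate * a.grad
        b -= learning_rate * b.grad
        c -= learning_rate * c.grad
        d -= learning_rate * d.grad

        # Proposed change: instead of clearing the gradients (e.g. a.grad = None),
        # reset them to loss * machine epsilon for a standard 64-bit float.
        a.grad = loss * sys.float_info.epsilon
        b.grad = loss * sys.float_info.epsilon
        c.grad = loss * sys.float_info.epsilon
        d.grad = loss * sys.float_info.epsilon
```

Under this assumption, the gradients carried into the next `backward()` call are on the order of `loss * 2.22e-16` rather than exactly zero, which is negligible relative to the accumulated gradient magnitudes.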

Checklist

  • The issue being fixed is referenced in the description (see "Fixes #ISSUE_NUMBER" above)
  • Only one issue is addressed in this pull request
  • Labels from the issue that this PR is fixing are added to this pull request
  • No unnecessary issues are included in this pull request


pytorch-bot bot commented Jun 17, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/tutorials/3399

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 4187d85 with merge base 8476a99:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.
